29. Gradient Descent
In this lesson, we'll learn the principles and the math behind the gradient descent algorithm.
Gradient Calculation
In the last few videos, we learned that in order to minimize the error function, we need to take some derivatives. So let's get our hands dirty and actually compute the derivative of the error function. The first thing to notice is that the sigmoid function has a really nice derivative. Namely,

$\sigma'(x) = \sigma(x)(1 - \sigma(x))$

The reason for this is the following: we can calculate it using the quotient formula:

$\sigma'(x) = \frac{\partial}{\partial x} \frac{1}{1+e^{-x}} = \frac{e^{-x}}{(1+e^{-x})^2} = \frac{1}{1+e^{-x}} \cdot \frac{e^{-x}}{1+e^{-x}} = \sigma(x)(1 - \sigma(x))$
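A quick way to sanity-check this identity is to compare the closed form against a finite-difference approximation of the derivative; here's a minimal sketch using NumPy:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(x):
    # Closed form derived above: sigma'(x) = sigma(x) * (1 - sigma(x))
    s = sigmoid(x)
    return s * (1.0 - s)

# Compare with a centered finite difference at several points
x = np.linspace(-5, 5, 11)
h = 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
print(np.max(np.abs(numeric - sigmoid_prime(x))))  # tiny (finite-difference error)
```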
And now, let's recall that if we have $m$ points labelled $x^{(1)}, x^{(2)}, \ldots, x^{(m)}$, the error formula is:

$E = -\frac{1}{m} \sum_{i=1}^{m} \left( y_i \ln(\hat{y}_i) + (1 - y_i) \ln(1 - \hat{y}_i) \right)$

where the prediction is given by $\hat{y}_i = \sigma(Wx^{(i)} + b)$.

Our goal is to calculate the gradient of $E$ at a point $x = (x_1, \ldots, x_n)$, given by the partial derivatives

$\nabla E = \left( \frac{\partial E}{\partial w_1}, \cdots, \frac{\partial E}{\partial w_n}, \frac{\partial E}{\partial b} \right)$
To simplify our calculations, we'll actually think of the error that each point produces, and calculate the derivative of this error. The total error, then, is the average of the errors at all the points. The error produced by each point is, simply,

$E = -y \ln(\hat{y}) - (1 - y) \ln(1 - \hat{y})$
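To make the per-point error and its average concrete, here is a small sketch (the weights and data below are made-up illustrative values, not from the lesson):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def point_error(y, y_hat):
    # Per-point error: -y ln(y_hat) - (1 - y) ln(1 - y_hat)
    return -y * np.log(y_hat) - (1 - y) * np.log(1 - y_hat)

# Illustrative weights, bias, and three labelled points (assumed values)
W = np.array([0.5, -0.2])
b = 0.1
X = np.array([[1.0, 2.0], [0.5, -1.5], [-1.0, 0.3]])
y = np.array([1.0, 0.0, 1.0])

y_hat = sigmoid(X @ W + b)          # predictions for each point
E = np.mean(point_error(y, y_hat))  # total error = average of per-point errors
print(E)
```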
In order to calculate the derivative of this error with respect to the weights, we'll first calculate $\frac{\partial \hat{y}}{\partial w_j}$. Recall that $\hat{y} = \sigma(Wx + b)$, so:

$\frac{\partial \hat{y}}{\partial w_j} = \frac{\partial}{\partial w_j} \sigma(Wx+b) = \sigma(Wx+b)\left(1 - \sigma(Wx+b)\right) \cdot \frac{\partial}{\partial w_j}(Wx+b) = \hat{y}(1-\hat{y}) \cdot \frac{\partial}{\partial w_j}(w_1 x_1 + \cdots + w_n x_n + b) = \hat{y}(1-\hat{y}) \cdot x_j$

The last equality holds because the only term in the sum which is not a constant with respect to $w_j$ is precisely $w_j x_j$, which clearly has derivative $x_j$.
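We can verify $\frac{\partial \hat{y}}{\partial w_j} = \hat{y}(1-\hat{y})\,x_j$ numerically by perturbing each weight in turn; a sketch with assumed values for $W$, $b$, and $x$:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A single point x with weights W and bias b (illustrative values)
W = np.array([0.3, -0.8, 0.5])
b = 0.2
x = np.array([1.5, -0.4, 2.0])

y_hat = sigmoid(W @ x + b)
analytic = y_hat * (1 - y_hat) * x  # the closed form derived above

# Finite-difference estimate of d(y_hat)/d(w_j) for each j
h = 1e-6
numeric = np.zeros_like(W)
for j in range(len(W)):
    Wp, Wm = W.copy(), W.copy()
    Wp[j] += h
    Wm[j] -= h
    numeric[j] = (sigmoid(Wp @ x + b) - sigmoid(Wm @ x + b)) / (2 * h)

print(np.max(np.abs(numeric - analytic)))  # tiny (finite-difference error)
```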
Now, we can go ahead and calculate the derivative of the error $E$ at a point $x$, with respect to the weight $w_j$:

$\frac{\partial E}{\partial w_j} = \frac{\partial}{\partial w_j} \left[ -y \ln(\hat{y}) - (1-y) \ln(1-\hat{y}) \right] = -y \cdot \frac{1}{\hat{y}} \cdot \hat{y}(1-\hat{y}) x_j - (1-y) \cdot \frac{1}{1-\hat{y}} \cdot \left(-\hat{y}(1-\hat{y}) x_j\right)$
$= -y(1-\hat{y}) x_j + (1-y)\hat{y}\, x_j = -(y - \hat{y}) x_j$
A similar calculation will show us that

$\frac{\partial E}{\partial b} = -(y - \hat{y})$
This actually tells us something very important. For a point with coordinates $(x_1, \ldots, x_n)$, label $y$, and prediction $\hat{y}$, the gradient of the error function at that point is

$\nabla E = \left( \frac{\partial E}{\partial w_1}, \cdots, \frac{\partial E}{\partial w_n}, \frac{\partial E}{\partial b} \right) = \left( -(y-\hat{y})x_1, \cdots, -(y-\hat{y})x_n, -(y-\hat{y}) \right)$

In summary, the gradient is

$\nabla E = -(y - \hat{y})(x_1, \ldots, x_n, 1)$
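The closed form for the gradient is easy to check against finite differences of the per-point error; a minimal sketch with illustrative values:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def error(W, b, x, y):
    # Per-point error: -y ln(y_hat) - (1 - y) ln(1 - y_hat)
    y_hat = sigmoid(W @ x + b)
    return -y * np.log(y_hat) - (1 - y) * np.log(1 - y_hat)

# Illustrative point (assumed values)
W = np.array([0.4, -0.6])
b = -0.1
x = np.array([2.0, 1.0])
y = 1.0

y_hat = sigmoid(W @ x + b)
# Closed form: grad E = -(y - y_hat) * (x_1, ..., x_n, 1)
grad = -(y - y_hat) * np.append(x, 1.0)

# Finite differences over the parameters (w_1, ..., w_n, b)
h = 1e-6
params = np.append(W, b)
numeric = np.zeros_like(params)
for j in range(len(params)):
    p, m = params.copy(), params.copy()
    p[j] += h
    m[j] -= h
    numeric[j] = (error(p[:-1], p[-1], x, y) - error(m[:-1], m[-1], x, y)) / (2 * h)

print(np.max(np.abs(numeric - grad)))  # tiny (finite-difference error)
```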
If you think about it, this is fascinating. The gradient is actually a scalar times the coordinates of the point! And what is the scalar? Nothing less than a multiple of the difference between the label and the prediction. What significance does this have?
SOLUTION:
- The closer the label is to the prediction, the smaller the gradient.
- The farther the label is from the prediction, the larger the gradient.
So, a small gradient means we'll change our coordinates by a little bit, and a large gradient means we'll change our coordinates by a lot.
If this sounds anything like the perceptron algorithm, this is no coincidence! We'll see it in a bit.
Gradient Descent Step
Therefore, since the gradient descent step simply consists of subtracting a multiple of the gradient of the error function at every point, this updates the weights in the following way:

$w_i' \leftarrow w_i - \alpha \frac{\partial E}{\partial w_i}$

which is equivalent to

$w_i' \leftarrow w_i + \alpha (y - \hat{y}) x_i$

Similarly, it updates the bias in the following way:

$b' \leftarrow b + \alpha (y - \hat{y})$
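Putting the update rules together, one gradient descent step for a single point might look like the sketch below (the data, labels, and learning rate are assumed illustrative values):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gradient_descent_step(W, b, x, y, alpha=0.1):
    # One step using the update rules above:
    #   w_i' = w_i + alpha * (y - y_hat) * x_i
    #   b'   = b   + alpha * (y - y_hat)
    y_hat = sigmoid(W @ x + b)
    W_new = W + alpha * (y - y_hat) * x
    b_new = b + alpha * (y - y_hat)
    return W_new, b_new

def point_error(W, b, x, y):
    y_hat = sigmoid(W @ x + b)
    return -y * np.log(y_hat) - (1 - y) * np.log(1 - y_hat)

# Illustrative point with label 1, starting from zero weights
W = np.array([0.0, 0.0])
b = 0.0
x = np.array([1.0, 2.0])
y = 1.0

before = point_error(W, b, x, y)
W, b = gradient_descent_step(W, b, x, y)
after = point_error(W, b, x, y)
print(before, after)  # the error at this point decreases after the step
```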
Note: Since we've taken the average of the errors, the term we are adding should be $\frac{1}{m} \cdot \alpha$ instead of $\alpha$, but as $\frac{1}{m}$ is a constant, then in order to simplify calculations, we'll just take $\frac{1}{m} \cdot \alpha$ to be our learning rate, and abuse the notation by just calling it $\alpha$.